Creating a new pipeline using Seurat v4.0.2 (available 2021.06.08)
Load libraries required for Seurat v4
knitr::opts_knit$set(root.dir = "~/Desktop/10XGenomicsData/msAggr_scRNASeq/")
library(dplyr)
library(Seurat)
library(patchwork)
library(ggplot2)
library(clustree)
Store session info
sink("msAggr_seurat-v1.20210608")
sessionInfo()
sink()
ScaleData
https://bioconductor.org/packages/3.10/workflows/vignettes/simpleSingleCell/inst/doc/batch.html#62_for_gene-based_analyses

>You can also normalize and scale data for the RNA assay. There are numerous resources on this, but Aaron Lun describes why the original log-normalized values should be used for DE and visualizations of expression quite well here:
>
>>For gene-based procedures like differential expression (DE) analyses or gene network construction, it is desirable to use the original log-expression values or counts. The corrected values are only used to obtain cell-level results such as clusters or trajectories. Batch effects are handled explicitly using blocking terms or via a meta-analysis across batches. We do not use the corrected values directly in gene-based analyses, for various reasons:
>>
>>It is usually inappropriate to perform DE analyses on batch-corrected values, due to the failure to model the uncertainty of the correction. This usually results in loss of type I error control, i.e., more false positives than expected.
>>
>>The correction does not preserve the mean-variance relationship. Applications of common DE methods like edgeR or limma are unlikely to be valid.
>>
>>Batch correction may (correctly) remove biological differences between batches in the course of mapping all cells onto a common coordinate system. Returning to the uncorrected expression values provides an opportunity for detecting such differences if they are of interest. Conversely, if the batch correction made a mistake, the use of the uncorrected expression values provides an important sanity check.
>
>In addition, the normalized values in SCT and integrated assays don't necessarily correspond to per-gene expression values anyway, rather containing residuals (in the case of the scale.data slot for each).
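As a concrete illustration of the point above (a sketch only, not part of the recorded pipeline): any DE test should pull expression from the log-normalized RNA assay rather than from an SCT or integrated assay. The cluster identities used here are hypothetical.

# Sketch: run DE on the log-normalized RNA assay, not on corrected values
DefaultAssay(seurat.object) <- "RNA"
de.markers <- FindMarkers(seurat.object, ident.1 = "0", ident.2 = "1",
                          assay = "RNA", slot = "data")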
Figure out how to load the 4 cell populations into a single Seurat object
SET SEED? (TODO: decide whether to set a seed for reproducibility.)
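If the seed question matters, a global seed could be set here (a sketch; Seurat functions such as RunUMAP also expose their own seed.use argument, which defaults to 42):

set.seed(42)  # arbitrary value; only needed if reproducibility of stochastic steps matters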
projectName <- "msAggr"
jackstraw.dim <- 40
source("msAggr_AnalysisCode/read_10XGenomics_data.R")
setwd("../cellRanger/")
Warning: The working directory was changed to /Users/heustonef/Desktop/10XGenomicsData/cellRanger inside a notebook chunk. The working directory will be reset when the chunk is finished running. Use the knitr root.dir option in the setup chunk to change the working directory for notebook chunks.
data_file.list <- read_10XGenomics_data(sample.list = c("LSKm2", "CMPm2", "MEPm", "GMPm"))
seurat.object.data<-Read10X(data_file.list)
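No CreateSeuratObject() call made it into this export, but one is needed between Read10X() and the steps below; presumably something like this was run (a sketch; the min.cells/min.features values are assumptions, not the recorded settings):

seurat.object <- CreateSeuratObject(counts = seurat.object.data, project = projectName,
                                    min.cells = 3, min.features = 200)  # thresholds assumed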
seurat.object<- create_percentMito_column(seurat.object)
Error in create_percentMito_column(seurat.object) :
could not find function "create_percentMito_column"
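create_percentMito_column() is presumably a helper in the sourced analysis script that wasn't loaded in this session. A hypothetical version (not the author's actual function) would just wrap PercentageFeatureSet(), which is what's done manually below:

# Hypothetical helper, shown only for context; the manual PercentageFeatureSet call below does the same thing
create_percentMito_column <- function(obj, pattern = "^mt-") {
  obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = pattern)
  return(obj)
}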
Clean up to free memory
remove(seurat.object.data)
Add mitochondrial metadata and plot some basic features
seurat.object[["percent.mt"]] <- PercentageFeatureSet(seurat.object, pattern = "^mt-")
VlnPlot(seurat.object, features = c("nFeature_RNA", "nCount_RNA", "percent.mt"), ncol = 3, pt.size = 0, fill.by = 'orig.ident')
plot1 <- FeatureScatter(seurat.object, feature1 = "nCount_RNA", feature2 = "percent.mt", group.by = "orig.ident", pt.size = 0.01)
plot2 <- FeatureScatter(seurat.object, feature1 = "nCount_RNA", feature2 = "nFeature_RNA", group.by = "orig.ident", pt.size = 0.01)
plot1 + plot2
Remove low-quality cells. Require nFeature_RNA between 200 and 4000 (inclusive). Also require percent.mt < 5? (Undecided; the mito filter is not applied in the subset below.)
print(paste("original object:", nrow(seurat.object@meta.data), "cells", sep = " "))
[1] "original object: 41950 cells"
seurat.object <- subset(seurat.object,
subset = nFeature_RNA >=200 &
nFeature_RNA <= 4000
)
print(paste("new object:", nrow(seurat.object@meta.data), "cells", sep = " "))
[1] "new object: 40815 cells"
Struggling to wrap my head around this one. It seems that SCTransform is best for batch correction, but NormalizeData and ScaleData are best for DGE. Several vignettes have performed both.
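One common pattern in those vignettes (a sketch only; not what is run below) is SCTransform for dimensionality reduction/clustering while keeping log-normalized values in the RNA assay for DE:

# Sketch: cluster on the SCT assay, do DE on the log-normalized RNA assay
seurat.object <- SCTransform(seurat.object, vars.to.regress = "percent.mt", verbose = FALSE)
# ...RunPCA/FindNeighbors/FindClusters/RunUMAP on the SCT assay...
DefaultAssay(seurat.object) <- "RNA"
seurat.object <- NormalizeData(seurat.object)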
`selection.method` (from the FindVariableFeatures documentation): how to choose top variable features. Choose one of:

* vst: First, fits a line to the relationship of log(variance) and log(mean) using local polynomial regression (loess). Then standardizes the feature values using the observed mean and expected variance (given by the fitted line). Feature variance is then calculated on the standardized values after clipping to a maximum (see clip.max parameter).
* mean.var.plot (mvp): First, uses a function to calculate average expression (mean.function) and dispersion (dispersion.function) for each feature. Next, divides features into num.bin (default 20) bins based on their average expression, and calculates z-scores for dispersion within each bin. The purpose of this is to identify variable features while controlling for the strong relationship between variability and average expression.
* dispersion (disp): selects the genes with the highest dispersion values
seurat.object <- NormalizeData(seurat.object, normalization.method = "LogNormalize", scale.factor = 10000)
Performing log-normalization
Find variable features
seurat.object <- FindVariableFeatures(seurat.object, selection.method = "vst", nfeatures = 2000)
Calculating gene variances
Calculating feature variances of standardized and clipped values
top10 <- head(VariableFeatures(seurat.object), 10)
plot1 <- VariableFeaturePlot(seurat.object)
plot2 <- LabelPoints(plot = plot1, points = top10, repel = TRUE)
When using repel, set xnudge and ynudge to 0 for optimal results
plot1 + plot2
Scale data (linear transformation)
all.genes <- rownames(seurat.object)
seurat.object <- ScaleData(seurat.object, features = all.genes)
Centering and scaling data matrix
save.image(file = paste0(projectName, '.RData'))
Linear dimensional reduction. By default RunPCA uses VariableFeatures, but this can be changed.
seurat.object <- RunPCA(seurat.object, features = VariableFeatures(object = seurat.object))
VizDimLoadings(seurat.object, dims = 1:6, nfeatures = 10, reduction = "pca", ncol = 2)
DimPlot colored by orig.ident
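The chunk for that plot isn't in this export; it was presumably something along these lines (a sketch):

DimPlot(seurat.object, reduction = "pca", group.by = "orig.ident", pt.size = 0.01)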
Let’s put in a concerted effort to pick the right dimensionality using the newest software
save.image(paste0(projectName, ".RData"))
Error in paste0(projectName, ".RData") : object 'projectName' not found
Draw dim.reduction plots
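JackStrawPlot() needs JackStraw/ScoreJackStraw results on the object; those chunks aren't shown here, but something like the following would have to be run first (a sketch, using the jackstraw.dim variable defined at the top of the notebook):

# Sketch: compute and score JackStraw before plotting (slow on ~40k cells)
seurat.object <- JackStraw(seurat.object, dims = jackstraw.dim, num.replicate = 100)
seurat.object <- ScoreJackStraw(seurat.object, dims = 1:jackstraw.dim)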
JackStrawPlot(seurat.object, dims = 25:36)
Warning: Removed 18381 rows containing missing values (geom_point).
ElbowPlot(seurat.object, ndims = 50)
percent.variance(seurat.object@reductions$pca@stdev)
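percent.variance() is not a Seurat function; it presumably lives in the sourced analysis script. A hypothetical version consistent with how it's called below (plot.var/return.val arguments, cumsum() applied to the returned values):

# Hypothetical helper: percent of total variance explained per PC, from the PCA standard deviations
percent.variance <- function(pca.stdev, plot.var = TRUE, return.val = FALSE) {
  pct <- pca.stdev^2 / sum(pca.stdev^2) * 100
  if (plot.var) plot(cumsum(pct), type = "b", xlab = "PC", ylab = "Cumulative % variance")
  if (return.val) return(pct)
}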
Number of PCs describing X% of variance
36 PCs describe 95% of the variance, but one of the earlier critiques was that too many dimensions were included. The best way to deal with this is probably to try a couple of different values. Let's start with 36 because I dare to rebel.
tot.var <- percent.variance(seurat.object@reductions$pca@stdev, plot.var = FALSE, return.val = TRUE)
paste0("Num pcs for 80% variance:", length(which(cumsum(tot.var) <= 80)))
[1] "Num pcs for 80% variance:11"
paste0("Num pcs for 85% variance:", length(which(cumsum(tot.var) <= 85)))
[1] "Num pcs for 85% variance:16"
paste0("Num pcs for 90% variance:", length(which(cumsum(tot.var) <= 90)))
[1] "Num pcs for 90% variance:24"
paste0("Num pcs for 95% variance:", length(which(cumsum(tot.var) <= 95)))
[1] "Num pcs for 95% variance:36"
Generate UMAP data
seurat.object <- FindNeighbors(seurat.object, dims = 1:36)
Computing nearest neighbor graph
Computing SNN
seurat.object <- FindClusters(seurat.object, resolution = 0.5)
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1546414
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.9063
Number of communities: 15
Elapsed time: 9 seconds
Plot the results
seurat.object <- RunUMAP(seurat.object, dims = 1:36)
Warning: The default method for RunUMAP has changed from calling Python UMAP via reticulate to the R-native UWOT using the cosine metric
To use Python UMAP via reticulate, set umap.method to 'umap-learn' and metric to 'correlation'
This message will be shown once per session
10:13:32 UMAP embedding parameters a = 0.9922 b = 1.112
10:13:33 Read 40815 rows and found 36 numeric columns
10:13:33 Using Annoy for neighbor search, n_neighbors = 30
10:13:33 Building Annoy index with metric = cosine, n_trees = 50
10:13:36 Writing NN index file to temp file /var/folders/4f/fwrj6fnn1dn4g8wsf0zv563hjsvl24/T//RtmppBwCob/filed75535a2180
10:13:36 Searching Annoy index using 1 thread, search_k = 3000
10:13:46 Annoy recall = 100%
10:13:48 Commencing smooth kNN distance calibration using 1 thread
10:13:50 Initializing from normalized Laplacian + noise
10:13:52 Commencing optimization for 200 epochs, with 1790844 positive edges
10:14:11 Optimization finished
DimPlot(seurat.object,
reduction = "umap"
) + ggtitle("msAggr dim36 res0.5")
DimPlot(seurat.object,
reduction = "umap",
group.by = "orig.ident"
) + ggtitle("msAggt dim36 orig.ident")
36 dimensions looks very “swoopy”; I might have asked for too many dimensions. Will try dim = 24, which accounts for 90% of variance.
saveRDS(seurat.object, file = "msAggr_dim36.rds")
Repeat clustering and UMAP with dim = 24
seurat.object <- FindNeighbors(seurat.object, dims = 1:24)
Computing nearest neighbor graph
Computing SNN
seurat.object <- FindClusters(seurat.object, resolution = 0.5)
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1436767
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.9066
Number of communities: 14
Elapsed time: 10 seconds
seurat.object <- RunUMAP(seurat.object, dims = 1:24)
11:02:14 UMAP embedding parameters a = 0.9922 b = 1.112
11:02:14 Read 40815 rows and found 24 numeric columns
11:02:14 Using Annoy for neighbor search, n_neighbors = 30
11:02:14 Building Annoy index with metric = cosine, n_trees = 50
11:02:18 Writing NN index file to temp file /var/folders/4f/fwrj6fnn1dn4g8wsf0zv563hjsvl24/T//RtmppBwCob/filed75730bd234
11:02:18 Searching Annoy index using 1 thread, search_k = 3000
11:02:28 Annoy recall = 100%
11:02:29 Commencing smooth kNN distance calibration using 1 thread
11:02:31 Initializing from normalized Laplacian + noise
11:02:33 Commencing optimization for 200 epochs, with 1731126 positive edges
11:02:52 Optimization finished
DimPlot(seurat.object,
reduction = "umap"
) + ggtitle("msAggr dim24 res0.5")
DimPlot(seurat.object,
reduction = "umap",
group.by = "orig.ident"
) + ggtitle("msAggt dim24 orig.ident")
24 vs 36 dimensions doesn't seem to make much difference re: overall relationships between cells. Should probably create two separate objects and try a few plotting methods, but will start by focusing on 36 dimensions.
Will proceed with dim = 36 and do clustering analysis with a range of clusters. Later can do the tree-based over-clustering assessment
seurat.object <- readRDS("msAggr_dim36.rds")
Picking range of resolutions
Run FindClusters at each resolution, regenerate the UMAP, and then plot the clustering at each resolution. Going to need a much better color palette, or try mixing in some different symbols.
for(x in c(0.5, 1, 1.5, 2, 2.5)){
seurat.object <- FindClusters(seurat.object, resolution = x)
}
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1546414
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.9063
Number of communities: 15
Elapsed time: 8 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1546414
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8716
Number of communities: 23
Elapsed time: 8 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1546414
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8463
Number of communities: 27
Elapsed time: 7 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1546414
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8267
Number of communities: 36
Elapsed time: 7 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 40815
Number of edges: 1546414
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8097
Number of communities: 42
Elapsed time: 8 seconds
seurat.object <- RunUMAP(seurat.object, dims = 1:36)
15:19:06 UMAP embedding parameters a = 0.9922 b = 1.112
15:19:06 Read 40815 rows and found 36 numeric columns
15:19:06 Using Annoy for neighbor search, n_neighbors = 30
15:19:06 Building Annoy index with metric = cosine, n_trees = 50
15:19:10 Writing NN index file to temp file /var/folders/4f/fwrj6fnn1dn4g8wsf0zv563hjsvl24/T//RtmpebgnbZ/fileb3657e9aac
15:19:10 Searching Annoy index using 1 thread, search_k = 3000
15:19:19 Annoy recall = 100%
15:19:20 Commencing smooth kNN distance calibration using 1 thread
15:19:21 Initializing from normalized Laplacian + noise
15:19:24 Commencing optimization for 200 epochs, with 1790844 positive edges
15:19:43 Optimization finished
for (meta.col in colnames(seurat.object@meta.data)){
if(grepl(pattern = ("RNA_snn_res"), x = meta.col)==TRUE){
myplot <- DimPlot(seurat.object,
group.by = meta.col,
reduction = "umap",
cols = colorRamps::primary.colors(n = length(levels(seurat.object@meta.data[[meta.col]])))
) +
ggtitle(paste0("msAggr dim36 res", gsub("RNA_snn_res", "", meta.col) ))
plot(myplot)
}
}
Must ensure we have the right cluster stability; that is, cells that start in the same cluster should tend to stay in the same cluster as resolution changes. If the data are over-clustered, cells will bounce between groups.
Following [this tutorial by Matt O.](https://towardsdatascience.com/10-tips-for-choosing-the-optimal-number-of-clusters-277e93d72d92).
Previously my favourite has been clustree, which gives a nice visual. NB: For some reason clustree::clustree() didn't work, whereas library(clustree) followed by clustree() did.
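A quick numeric check of the same idea (a sketch): cross-tabulate assignments at two adjacent resolutions; stable clusters map mostly one-to-one, while unstable ones scatter across several columns.

# Sketch: how res-0.5 clusters split at res 1
table(seurat.object$RNA_snn_res.0.5, seurat.object$RNA_snn_res.1)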
saveRDS(seurat.object, file = "msAggr_dim36.rds")
clustree(seurat.object, prefix = "RNA_snn_res.", node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'data', node_colour = "sc3_stability") +
  scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'scale.data', node_colour = "sc3_stability") +
  scale_color_continuous(low = 'red3', high = 'white')
These data suggest that node stability is awful! Need to figure out whether this is a dimensional-reduction error or a clustering error.
Differences could include:
* cells in each population (Cell Ranger v6 includes more cells than Cell Ranger v1, especially in MEP)
* dimensionality is incorrect
* ScaleData didn't account for regression factors (e.g., "nCount_RNA" or "nFeature_RNA"); see the sketch below
* incorrect normalization/scaling method
* clustering is too strict or not strict enough
* neighborhood analysis used wrong parameters
* should include a mitoC filter (there's a chunk of MEP with mitoC at ~40%)
* SCTransform accounts better for sources of variability
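Regarding the ScaleData regression point above, a sketch of what regressing out nuisance variables would look like (not run here; SCTransform is meant to handle this more gracefully):

# Sketch: re-scale with nuisance variables regressed out before re-running PCA
seurat.object <- ScaleData(seurat.object, features = all.genes,
                           vars.to.regress = c("nCount_RNA", "percent.mt"))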
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'counts', node_colour = "sc3_stability") +
  scale_color_continuous(low = 'red3', high = 'white')
sapply(c("LSKm2", "CMPm2", "MEPm", "GMPm"), function(x) (c(nrow(seurat.object@meta.data[seurat.object@meta.data$orig.ident == x,]))))
LSKm2 CMPm2 MEPm GMPm
11004 12342 8791 8678
Looks like MEPm is the only sample with that huge MitoC % lump @ 40%. What do these cells look like, otherwise?
for (x in c("LSKm2", "CMPm2", "MEPm", "GMPm")){
h = hist(seurat.object@meta.data[seurat.object@meta.data$orig.ident == x, 'percent.mt'], breaks = 30, plot = FALSE)
h$density = h$counts/sum(h$counts)*100
plot(h,freq=FALSE, main = paste(x, "percent mitoC"), xlab = "percent mitoC", ylab = "Frequency")
}
Save dim36 as is and try clustering analysis @ dim24
VlnPlot(subset(seurat.object, subset = orig.ident == "MEPm"),
features = c("nFeature_RNA", "nCount_RNA", "percent.mt"), ncol = 1, pt.size = 0, fill.by = 'ident', flip = TRUE)
One possibility is that I’ve included too many dimensions. Will see if 90% increases stability.
saveRDS(seurat.object, file = "msAggr_AnalysisCode/msAggr_dim36.rds")
Save object
saveRDS(seurat.object, file = "msAggr_dim24.rds")
saveRDS(seurat.object, file = "msAggr_dim24.rds")
Think I’ll explore regression factors using SCTransform in new document.